
    Measuring the Functional Size of Real-Time and Embedded Software: a Comparison of Function Point Analysis and COSMIC

    The most widely used methods and tools for estimating the cost of software development require that the functional size of the program to be developed be measured, either in “traditional” Function Points or in COSMIC Function Points. The latter were proposed to solve some shortcomings of the former, including not being well suited for representing the functionality of real-time and embedded software. However, little evidence exists to support the claim that COSMIC Function Points are better suited than traditional Function Points for the measurement of real-time and embedded applications. Our goal is to compare how well the two methods can be used in functional measurement of real-time and embedded systems. We applied both measurement methods to a number of situations that occur quite often in real-time and embedded software. Our results seem to indicate that, overall, COSMIC Function Points are better suited than traditional Function Points for measuring characteristic features of real-time and embedded systems. Our results also provide practitioners with useful indications about the pros and cons of functional size measurement methods when confronted with specific features of real-time and embedded software.

    On the Ability of Functional Size Measurement Methods to Size Complex Software Applications

    The most popular Functional Size Measurement methods, namely IFPUG Function Point Analysis and the COSMIC method, adopt a concept of “functionality” that is based mainly on the data involved in functions and data movements. Neither of the mentioned methods takes directly into consideration the amount of data processing involved in a process. Functional size measures are often used as a basis for estimating the effort required for software development, and it is known that development effort does depend on the amount of data processing code to be written. Thus, it is interesting to investigate to what extent the most popular functional size measures represent the functional processing features of requirements and, consequently, the amount of data processing code to be written. To this end, we consider a few applications that provide similar functionality, but require different amounts of data processing. These applications are then measured via both functional size measurement methods and traditional size measures (such as Lines of Code). A comparison of the obtained measures shows that differences among the applications are best represented by differences in Lines of Code. It is likely that the actual size of an application that requires substantial amounts of data processing is not fully represented by functional size measures. In summary, the paper shows that not taking into account data processing dramatically limits the expressiveness of the size measures. Practitioners that use size measures for effort estimation should complement functional size measures with measures that quantify data processing, to get precise effort estimates.

    Comparing φ and the F-measure as Performance Metrics for Software-related Classifications

    Context: The F-measure has been widely used as a performance metric when selecting binary classifiers for prediction, but it has also been widely criticized, especially given the availability of alternatives such as φ (also known as Matthews Correlation Coefficient). Objectives: Our goals are to (1) investigate possible issues related to the F-measure in depth and show how φ can address them, and (2) explore the relationships between the F-measure and φ. Method: Based on the definitions of φ and the F-measure, we derive a few mathematical properties of these two performance metrics and of the relationships between them. To demonstrate the practical effects of these mathematical properties, we illustrate the outcomes of an empirical study involving 70 Empirical Software Engineering datasets and 837 classifiers. Results: We show that φ can be defined as a function of Precision and Recall, which are the only two performance metrics used to define the F-measure, and the rate of actually positive software modules in a dataset. Also, φ can be expressed as a function of the F-measure and the rates of actual and estimated positive software modules. We derive the minimum and maximum value of φ for any given value of the F-measure, and the conditions under which both the F-measure and φ rank two classifiers in the same order. Conclusions: Our results show that φ is a sensible and useful metric for assessing the performance of binary classifiers. We also recommend that the F-measure should not be used by itself to assess the performance of a classifier, but that the rate of positives should always be specified as well, at least to assess if and to what extent a classifier performs better than random classification. The mathematical relationships described here can also be used to reinterpret the conclusions of previously published papers that relied mainly on the F-measure as a performance metric.
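
    As an informal illustration of the relationships summarized above, the following Python sketch (not taken from the paper; the helper names and example values are our own) computes the F-measure and φ from a confusion matrix, and reconstructs φ from Precision, Recall, and the rate of actual positives only:

        from math import sqrt

        def f_measure(tp, fp, fn):
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            return 2 * precision * recall / (precision + recall)

        def phi(tp, fp, fn, tn):
            # Matthews Correlation Coefficient computed from the confusion matrix.
            num = tp * tn - fp * fn
            den = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return num / den if den else 0.0

        def phi_from_precision_recall(precision, recall, rho, n=10_000):
            # Rebuild the confusion matrix from Precision, Recall and the rate of
            # actual positives (rho); this shows phi is fully determined by them.
            tp = recall * rho * n
            fn = rho * n - tp
            fp = tp * (1.0 - precision) / precision
            tn = n - tp - fn - fp
            return phi(tp, fp, fn, tn)

        # A classifier with Precision = Recall = 0.5 keeps F-measure = 0.5 for any
        # class balance, yet phi changes with the rate of positives.
        for rho in (0.5, 0.1, 0.01):
            print(rho, round(phi_from_precision_recall(0.5, 0.5, rho), 3))

    Run as-is, the loop shows φ varying with the rate of positives while the F-measure stays constant, which is the kind of effect the paper analyzes in depth.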

    Using Function Point Analysis and COSMIC for Measuring the Functional Size of Real-Time and Embedded Software: a Comparison

    Function Point Analysis and the COSMIC method are very often used for measuring the functional size of programs. The COSMIC method was proposed to solve some shortcomings of Function Points, including not being well suited for representing the functionality of real-time and embedded software. However, little evidence exists to support the claim that COSMIC Function Points are better suited than traditional Function Points for the measurement of real-time and embedded applications. To help practitioners choose a method for measuring real-time or embedded software, some evidence of the merits and shortcomings of the two methods is needed. Accordingly, our goal is to compare how well the two methods can be used in the functional measurement of real-time and embedded systems. To this end, we applied both measurement methods to situations that occur quite often in real-time and embedded software and are not considered by standard measurement practices. Our results indicate that, overall, COSMIC Function Points are better suited than traditional Function Points for measuring characteristic features of real-time and embedded systems.
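
    For readers unfamiliar with how the COSMIC method sizes a functional process, the toy sketch below (an invented real-time example, not drawn from the papers above) counts data movements, each worth 1 CFP:

        # Each data movement (Entry, Exit, Read, Write) of a functional process
        # contributes 1 COSMIC Function Point (CFP). The process below is invented.
        data_movements = {
            "read_sensor_sample": [
                "Entry",   # triggering event carrying the sampled value
                "Read",    # read calibration data from persistent storage
                "Write",   # store the corrected sample
                "Exit",    # send an alarm message when a threshold is exceeded
            ],
        }

        cosmic_size_cfp = sum(len(moves) for moves in data_movements.values())
        print(cosmic_size_cfp)  # 4 CFP for this single functional process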

    An Empirical Evaluation of Effort Prediction Models Based on Functional Size Measures

    Software development effort estimation is among the most interesting issues for project managers, since reliable estimates are the basis of good planning and project control. Several different techniques have been proposed for effort estimation, and practitioners need evidence on which to base the choice of accurate estimation methods. The work reported here aims at evaluating the accuracy of software development effort estimates that can be obtained via popular techniques, such as those using regression models and those based on analogy. The functional size and the development effort of twenty software development projects were measured, and the resulting dataset was used to derive effort estimation models and evaluate their accuracy. Our data analysis shows that estimation based on the closest analogues provides better results for most models, but very bad estimates in a few cases. To mitigate this behavior, the correction of regression toward the mean proved effective. According to the results of our analysis, it is advisable that the regression toward the mean correction be used when estimates are based on the closest analogues. Once corrected, the accuracy of analogy-based estimation is not substantially different from that of regression-based models.
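
    The following Python sketch (illustrative only: the dataset, the distance measure, and the shrinkage factor are our assumptions, not values from the study) shows the kind of closest-analogue estimation with a correction toward the mean that the abstract refers to:

        from statistics import mean

        # (functional_size, effort) pairs for completed projects -- made-up data.
        history = [(120, 900), (200, 1500), (310, 2100), (80, 700), (450, 3600)]

        def estimate_effort(new_size, shrinkage=0.5):
            # 1. Pick the closest analogue by functional size.
            closest_size, closest_effort = min(history, key=lambda p: abs(p[0] - new_size))
            # 2. Productivity (effort per size unit) of the analogue and of the sample.
            analogue_prod = closest_effort / closest_size
            mean_prod = mean(e / s for s, e in history)
            # 3. Pull the analogue's productivity toward the mean (a regression
            #    toward the mean style correction), then scale by the new size.
            corrected_prod = mean_prod + shrinkage * (analogue_prod - mean_prod)
            return new_size * corrected_prod

        print(round(estimate_effort(260)))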

    An operational process for goal-driven definition of measures


    Software Measures for Business Processes

    Designing a business process, which is executed by a Workflow Management System, recalls the activity of writing software source code, which is executed by a computer. Different business processes may have different qualities, such as size and structural complexity, some of which can be measured based on the formal descriptions of the processes. This paper defines measures for quantifying business process qualities by drawing on concepts that have been used for defining measures for software code. Specifically, the measures we propose and apply to business processes are related to attributes of activities, control-flow, data-flow, and resources. This allows the business process designer to obtain a comprehensive evaluation of business processes according to several different attributes.
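
    A minimal sketch of this kind of graph-based measurement follows (the process, the measures, and their names are our illustrative assumptions, not the measures defined in the paper):

        # A business process modelled as a directed control-flow graph, with a few
        # simple code-inspired measures computed on it.
        activities = {"receive_order", "check_stock", "reject", "ship", "invoice"}
        control_flow = [                       # directed edges between activities
            ("receive_order", "check_stock"),
            ("check_stock", "reject"),
            ("check_stock", "ship"),
            ("ship", "invoice"),
        ]

        size = len(activities)                 # a size measure: number of activities
        edges = len(control_flow)
        cyclomatic = edges - size + 2          # McCabe-style structural complexity
        # Fan-out per activity: a simple indicator of branching (split) points.
        fan_out = {a: sum(1 for s, _ in control_flow if s == a) for a in activities}

        print(size, edges, cyclomatic, fan_out)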

    Empirical Ground-Motion Prediction Equations for Northern Italy Using Weak- and Strong-Motion Amplitudes, Frequency Content, and Duration Parameters

    The goals of this work are to review the Northern-Italy ground-motion prediction equations (GMPEs) for amplitude parameters and to propose new GMPEs for frequency content and duration parameters. Approximately 10,000 weak and strong waveforms have been collected merging information from different neighboring regional seismic networks operating in the last 30 yr throughout Northern Italy. New ground-motion models, calibrated for epicentral distances ≤100 km and for both local (ML) and moment magnitude (Mw), have been developed starting from a high quality dataset (624 waveforms) that consists of 82 selected earthquakes with ML and Mw up to 6.3 and 6.5, respectively. The vertical component and the maximum of the two horizontal components of motion have been considered, for both acceleration (peak ground horizontal acceleration [PGHA] and peak ground vertical acceleration [PGVA]) and velocity (peak ground horizontal velocity [PGHV] and peak ground vertical velocity [PGVV]) data. In order to make comparisons with the most commonly used prediction equations for the Italian territory (Sabetta and Pugliese, 1996 [hereafter, SP96] and Ambraseys et al. 2005a,b [hereafter, AM05]), the coefficients for acceleration response spectra (spectral horizontal acceleration [SHA] and spectral vertical acceleration [SVA]) and for pseudovelocity response spectra (pseudospectral horizontal velocity [PSHV] and pseudospectral vertical velocity [PSVV]) have been calculated for 12 periods ranging between 0.04 and 2 sec and for 14 periods ranging between 0.04 and 4 sec, respectively. Finally, empirical relations for Arias intensities (IA), Housner intensities (IH), and strong motion duration (DV) have also been calibrated. The site classification based on Eurocode (hereafter, EC8) classes has been used (ENV, 1998, 2002). The coefficients of the models have been determined using functional forms with an independent magnitude decay rate and applying the random effects model (Abrahamson and Youngs, 1992; Joyner and Boore, 1993), which allows the determination of the interevent, interstation, and record-to-record components of variance. The goodness of fit between observed and predicted values has been evaluated using the maximum likelihood approach as in Spudich et al. (1999). Comparing the proposed GMPEs with SP96 and AM05, it is possible to observe a faster decay of predicted ground motion, in particular for distances greater than 25 km and magnitudes higher than 5.0. The result is an improvement in fit of about one order of magnitude for magnitudes spanning from 3.5 to 4.5.
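
    For readers unfamiliar with GMPEs, the sketch below shows the general shape of such a prediction equation (the coefficients, pseudo-depth, and site term are placeholders for illustration only, not the calibrated values of this study):

        import math

        def log10_ground_motion(mag, epi_dist_km, a=-1.3, b=0.4, c=-1.6, h=5.0, site_term=0.0):
            # Generic attenuation form: log10(Y) = a + b*M + c*log10(sqrt(R^2 + h^2)) + site term,
            # where Y could be PGHA, PGHV, or a spectral ordinate, each with its own coefficients.
            return a + b * mag + c * math.log10(math.sqrt(epi_dist_km**2 + h**2)) + site_term

        # Example: predicted log10 amplitude for an Mw 5.5 event at 30 km epicentral distance.
        print(log10_ground_motion(5.5, 30.0))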

    Empirical ground motion prediction equations for northern Italy using weak and strong motion amplitudes, frequency content and duration parameters

    The aims of this work are to review the Northern-Italy ground motion prediction equations (hereinafter GMPEs) for amplitude parameters and to propose new GMPEs for frequency content and duration parameters. Approximately 10,000 weak and strong waveforms have been collected merging information from different neighbouring regional seismic networks operating in the last 30 years throughout Northern Italy. New ground motion models, calibrated for epicentral distances ≤ 100 km and for both local (Ml) and moment magnitude (Mw), have been developed starting from a high quality dataset (624 waveforms) which consists of 82 selected earthquakes with Ml and Mw up to 6.3 and 6.5, respectively. The vertical component and the maximum of the two horizontal components of motion have been considered, for both acceleration (PGHA and PGVA) and velocity (PGHV and PGVV) data. In order to make comparisons with the most commonly used prediction equations for the Italian territory (Sabetta and Pugliese, 1996 and Ambraseys et al. 2005a,b, hereinafter named SP96 and AM05), the coefficients for acceleration response spectra (SHA and SVA) and for pseudo-velocity response spectra (PSHV and PSVV) have been calculated for 12 periods ranging between 0.04 s and 2 s and for 14 periods ranging between 0.04 s and 4 s, respectively. Finally, empirical relations for Arias and Housner Intensities (IA, IH) and strong motion duration (DV) have also been calibrated. The site classification based on Eurocode (hereinafter EC8) classes has been used (ENV, 1998). The coefficients of the models have been determined using functional forms with an independent magnitude decay rate and applying the random effects model (Abrahamson and Youngs, 1992; Joyner and Boore, 1993), which allows the determination of the inter-event, inter-station and record-to-record components of variance. The goodness of fit between observed and predicted values has been evaluated using the maximum likelihood approach as in Spudich et al. (1999). Comparing the proposed GMPEs with both SP96 and AM05, it is possible to observe a faster decay of predicted ground motion, in particular for distances greater than 25 km and magnitudes higher than 5.0. The result is a fit improvement of about one order of magnitude for magnitudes spanning from 3.5 to 4.5.